19 research outputs found

    Cost minimization for unstable concurrent products in multi-stage production line using queueing analysis

    This research and the resulting contribution are outputs of Assumption University of Thailand, which partially supported the publication financially. Purpose: The paper applies queueing theory to evaluate a multi-stage production line process with concurrent goods. The intention of the article is to evaluate the efficiency of product assembly in the production line. Design/Methodology/Approach: To raise the efficiency of the assembly line, the performance of the individual stations must be controlled. Arriving concurrent products accumulate before flowing to each station. All experiments are based on queueing network analysis. Findings: A performance analysis for unstable concurrent sub-items in the production line is discussed. The proposed analysis improves the total sub-production time by reducing the queueing time at each station. Practical implications: The collected data are the number of workers, incoming and outgoing sub-products, the throughput rate, and the processing time of each station. At the front-loading station an operator unpacks product items into concurrent sub-items, which are automatically sorted by RFID tag or barcode identifiers. Simulation-based experiments are compared with and validated against results from real-world approximation. Originality/Value: The work offers an alternative improvement that increases the efficiency of the operation at each station at minimum cost. Peer-reviewed.
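
    The analysis above is queue-based; as a rough illustration of the kind of per-station calculation involved, the sketch below treats each station as an M/M/1 queue and derives utilization, mean queue length, and waiting time from standard formulas. The station rates are hypothetical placeholders, not the paper's measured data.

```python
# Sketch: per-station waiting-time and queue-length estimates for a
# multi-stage line, assuming each station behaves as an M/M/1 queue.
# Arrival and service rates below are illustrative placeholders.

def mm1_metrics(arrival_rate, service_rate):
    """Return utilization, mean queue length, queue wait, and total station time."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable station: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # utilization
    lq = rho ** 2 / (1 - rho)                  # mean number waiting (Lq)
    wq = lq / arrival_rate                     # mean time in queue (Wq), by Little's law
    w = wq + 1 / service_rate                  # mean time in station (Wq + service)
    return rho, lq, wq, w

# Hypothetical three-station line: (arrivals/hour, services/hour)
stations = [(20, 25), (20, 30), (20, 22)]

total_time = 0.0
for i, (lam, mu) in enumerate(stations, start=1):
    rho, lq, wq, w = mm1_metrics(lam, mu)
    total_time += w
    print(f"Station {i}: utilization={rho:.2f}, Lq={lq:.2f}, Wq={wq:.3f} h, W={w:.3f} h")

print(f"Total mean flow time through the line: {total_time:.3f} h")
```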

    Evaluation of load balancing approaches for Erlang concurrent application in cloud systems

    Cloud systems accommodate computing environments including PaaS (platform as a service), SaaS (software as a service), and IaaS (infrastructure as a service) that enable cloud services. A cloud system allows multiple users to employ computing services through browsers, reflecting an alternative service model that shifts the local computing workload to a distant site. Cloud virtualization is another characteristic of clouds: it delivers virtual computing services and imitates the functionality of physical computing resources. It relies on elastic load-balancing management that provides a flexible model of on-demand services. Virtualization allows organizations to achieve high levels of reliability, accessibility, and scalability by being able to execute applications on multiple resources simultaneously. In this paper we use a queueing model to consider flexible load balancing and evaluate performance metrics such as mean queue length, throughput, mean waiting time, utilization, and mean traversal time. The model accounts for arrivals of concurrent applications following an Erlang distribution. Simulation results for these performance metrics are investigated. The results indicate that, in cloud systems, both fairness and load balancing must be given significant consideration.
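
    As an illustration of the kind of queueing evaluation described, the sketch below simulates a small pool of virtual servers with Erlang-distributed inter-arrival times and compares two hypothetical balancing policies (round-robin versus join-the-least-loaded). The rates, server count, and job count are assumptions for demonstration only.

```python
# Sketch: comparing two load-balancing policies for a pool of virtual servers
# when job inter-arrival times follow an Erlang distribution.
# The rates, server count, and job count are illustrative assumptions.
import random

def erlang(k, rate, rng):
    """Erlang(k, rate) sample = sum of k exponential(rate) samples."""
    return sum(rng.expovariate(rate) for _ in range(k))

def simulate(policy, n_jobs=50_000, n_servers=4, k=3, arrival_rate=9.0,
             service_rate=1.0, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * n_servers      # time each server becomes idle
    t, total_wait, total_sojourn = 0.0, 0.0, 0.0
    for j in range(n_jobs):
        t += erlang(k, arrival_rate, rng)          # Erlang inter-arrival time
        if policy == "round_robin":
            s = j % n_servers
        else:                                      # join the least-loaded server
            s = min(range(n_servers), key=lambda i: free_at[i])
        wait = max(0.0, free_at[s] - t)            # Lindley recursion per server
        service = rng.expovariate(service_rate)
        free_at[s] = t + wait + service
        total_wait += wait
        total_sojourn += wait + service
    return total_wait / n_jobs, total_sojourn / n_jobs

for policy in ("round_robin", "least_loaded"):
    wq, w = simulate(policy)
    print(f"{policy:>12}: mean wait={wq:.3f}, mean traversal time={w:.3f}")
```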

    Approximation of regression-based fault minimization for network traffic

    This research compares three distinct approaches for computer network traffic prediction: the traditional stochastic gradient descent (SGD), which uses a few random samples instead of the complete dataset for each iterative calculation; the gradient descent algorithm (GDA), a well-known optimization approach in deep learning; and the proposed method. The network traffic is computed from the traffic load (data and multimedia) of the computer network nodes via the Internet. The SGD is a modest iteration but can settle on suboptimal solutions. The GDA is more complicated and can be more accurate than the SGD, but its parameters, such as the learning rate, the dataset granularity, and the loss function, are difficult to tune. Network traffic estimation helps improve performance and lower costs for various applications, such as adaptive rate control, load balancing, quality of service (QoS), fair bandwidth allocation, and anomaly detection. The proposed method determines optimal parameter values using simulation to compute the minimum value of the specified loss function in each iteration.
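
    To make the comparison concrete, the sketch below fits a linear traffic model with full-batch gradient descent and with stochastic gradient descent on synthetic data. The data, learning rates, and iteration counts are illustrative assumptions, not the paper's configuration.

```python
# Sketch: full-batch gradient descent versus stochastic gradient descent on a
# linear model of network traffic load. The synthetic data and learning rates
# are illustrative assumptions, not the paper's dataset or parameters.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.0, 0.5, 3.0, -2.5])
y = X @ true_w + rng.normal(scale=0.3, size=n)     # noisy traffic observations

def mse(w):
    return np.mean((X @ w - y) ** 2)

# Full-batch gradient descent (GDA): exact gradient over the whole dataset.
w_gd = np.zeros(d)
lr = 0.1
for _ in range(200):
    grad = 2.0 / n * X.T @ (X @ w_gd - y)
    w_gd -= lr * grad

# Stochastic gradient descent: one randomly sampled observation per update.
w_sgd = np.zeros(d)
lr = 0.01
for _ in range(20_000):
    i = rng.integers(n)
    grad = 2.0 * X[i] * (X[i] @ w_sgd - y[i])
    w_sgd -= lr * grad

print(f"GD  loss: {mse(w_gd):.4f}")
print(f"SGD loss: {mse(w_sgd):.4f}")
```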

    Proposed algorithm for image classification using regression-based pre-processing and recognition models

    Image classification algorithms can categorise pixels according to image attributes after pre-processing the learner's training samples. Precision and classification accuracy are difficult to compute because of the variable size of images (differing widths and heights) and the numerous characteristics of the images themselves. This research proposes an image classification algorithm based on regression-based pre-processing and recognition models. The proposed algorithm focuses on optimizing pre-processing results such as accuracy and precision. For evaluation and validation, the recognition model is mapped to cluster the digital images, which gives rise to a multidimensional state-space problem. Simulation results show that, compared with existing algorithms, the proposed method performs best, achieving optimal precision and accuracy in classification as well as a higher matching percentage based on image analytics.
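
    A minimal sketch of the idea follows: a fixed-length histogram feature acts as the regression-style pre-processing step for variable-size images, and a logistic-regression recognition model reports accuracy and precision. The synthetic images, feature choice, and scikit-learn estimator are assumptions for illustration, not the authors' pipeline.

```python
# Sketch: a pre-processing step (fixed-length intensity histogram features)
# followed by a recognition model (logistic regression). Synthetic grayscale
# "images" of varying sizes stand in for real data; all names and parameters
# here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(42)

def histogram_features(image, bins=16):
    """Reduce an image of any width/height to a fixed-length intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return hist

# Two synthetic classes: darker images (class 0) and brighter images (class 1),
# each with a random width and height to mimic variable-size inputs.
images, labels = [], []
for _ in range(400):
    h, w = rng.integers(20, 60, size=2)
    bright = rng.integers(2)
    img = np.clip(rng.normal(0.35 + 0.3 * bright, 0.15, size=(h, w)), 0, 1)
    images.append(img)
    labels.append(bright)

X = np.array([histogram_features(img) for img in images])
y = np.array(labels)

split = 300
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("accuracy :", accuracy_score(y[split:], pred))
print("precision:", precision_score(y[split:], pred))
```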

    Evaluation of a Multiple Regression Model for Noisy and Missing Data

    Standard data-collection problems may involve noiseless data, whereas large organizations commonly experience noisy and missing data, particularly when data are collected from individuals. Because noisy and missing data are significantly worrisome in large-scale data collection, investigating different filtering techniques for big-data environments is worthwhile. A multiple regression model in which big data is employed for experimentation is presented. An approximation for datasets with noisy and missing data is also proposed. The root mean squared error (RMSE) together with the correlation coefficient (COEF) is analyzed to establish the accuracy of the estimators. Finally, results predicted by massive online analysis (MOA) are compared with real data collected in a subsequent period. These theoretical predictions, with noisy and missing data estimated by simulation, are shown to be consistent with the real data. The deletion mechanism (DEL) performs best, with the lowest average percentage error.
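
    The sketch below illustrates one way such a comparison can be set up: a multiple regression fitted after either a deletion mechanism (DEL) or mean imputation, scored by RMSE and the correlation coefficient. The synthetic data and missingness rate are illustrative assumptions.

```python
# Sketch: comparing a deletion mechanism (DEL) with mean imputation for a
# multiple regression on data containing noise and missing values, scored by
# RMSE and the correlation coefficient (COEF). Data is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, d = 1000, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.5, -2.0, 0.8]) + rng.normal(scale=0.5, size=n)  # noisy target

# Randomly knock out 15% of the predictor entries.
mask = rng.random(X.shape) < 0.15
X_missing = np.where(mask, np.nan, X)

def fit_and_score(Xtr, ytr):
    A = np.column_stack([np.ones(len(Xtr)), Xtr])          # add intercept
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)            # least-squares fit
    pred = np.column_stack([np.ones(len(X)), X]) @ w       # score on the clean data
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    coef = np.corrcoef(pred, y)[0, 1]
    return rmse, coef

# DEL: drop every row that has any missing predictor.
keep = ~np.isnan(X_missing).any(axis=1)
print("DEL         RMSE/COEF:", fit_and_score(X_missing[keep], y[keep]))

# Mean imputation: replace missing entries with the column mean.
col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(np.isnan(X_missing), col_means, X_missing)
print("Mean-impute RMSE/COEF:", fit_and_score(X_imputed, y))
```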

    Estimation of regression-based model with bulk noisy data

    Bulk noise corrupts contributed data when a communication network has an extremely low signal-to-noise ratio. A method for correcting massive noise in individual records by means of information theory is widely discussed. One practical application of this approach to bulk noise estimation is analyzed using intelligent automation and machine learning tools, covering the cases in which bulk noise is present or absent. A regression-based model is employed for the investigation and experiments. An estimation for the practical case with bulk noisy datasets is proposed. The proposed method applies a slice-and-dice technique to partition a body of datasets into smaller portions so that the estimation can be carried out. The average error, correlation, absolute error, and mean square error are computed to validate the estimation. Results from massive online analysis (MOA) are verified against data collected in the following period. In many cases, prediction with bulk noisy data through MOA simulation reveals that random imputation minimizes the average error.
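
    As a rough illustration of slice-and-dice estimation with random imputation, the sketch below partitions a noisy dataset into fixed-size slices, fits a regression per slice, and reports the error measures mentioned above. The noise model, slice size, and imputation details are assumptions, not the paper's setup.

```python
# Sketch: slice-and-dice partitioning of a large noisy dataset into smaller
# blocks, each fitted with its own regression, plus random imputation for
# missing values. The data, slice size, and noise model are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
x = rng.uniform(0, 10, size=n)
y = 4.0 * x + 2.0 + rng.normal(scale=8.0, size=n)        # bulk noise on the target

# Introduce missing observations and fill them by random imputation:
# draw replacements at random from the observed values.
missing = rng.random(n) < 0.10
observed = y[~missing]
y_imputed = y.copy()
y_imputed[missing] = rng.choice(observed, size=missing.sum())

# Slice the data into blocks and fit a simple regression per block.
slice_size = 1000
abs_errors, sq_errors = [], []
for start in range(0, n, slice_size):
    sl = slice(start, start + slice_size)
    slope, intercept = np.polyfit(x[sl], y_imputed[sl], deg=1)
    pred = slope * x[sl] + intercept
    abs_errors.append(np.mean(np.abs(pred - y[sl])))
    sq_errors.append(np.mean((pred - y[sl]) ** 2))

print("mean absolute error per slice:", np.round(abs_errors, 2))
print("overall MSE:", round(float(np.mean(sq_errors)), 2))
print("correlation:", round(float(np.corrcoef(x, y_imputed)[0, 1]), 3))
```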

    Granularity analysis of classification and estimation for complex datasets with MOA

    Dispersed and unstructured datasets are major factors in determining the exact amount of space required. Depending on the size and the data distribution, and especially when the classes are significantly associated, the level of granularity needed for a precise classification of the datasets increases. Data complexity is one of the major attributes governing the proper value of the granularity, as it has a direct impact on performance. Dataset classification is a vital step in complex data analytics, designed to ensure that a dataset is ready to be efficiently scrutinized. Data collections frequently contain missing, noisy, and out-of-range values. Data analytics performed on data that has not been properly classified for such problems can produce unreliable outcomes. Hence, classification of complex data sources helps ensure the accuracy of datasets gathered for machine learning algorithms. Dataset complexity and pre-processing time reflect the effectiveness of each algorithm. Once the complexity of a dataset is characterized, comparatively simpler datasets can be investigated further with a parallelism approach. Speedup performance is measured by executing an MOA simulation. The proposed classification approach outperforms existing ones and improves the granularity level of complex datasets.
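
    The sketch below illustrates the speedup idea with a stand-in workload: partitions of a dataset are classified serially and then in parallel with Python's ProcessPoolExecutor, and the elapsed times are compared. The nearest-centroid classifier and the partition granularity are hypothetical; MOA itself is not used here.

```python
# Sketch: measuring speedup when partitions of a dataset are classified in
# parallel rather than serially. The nearest-centroid classifier and the
# partition granularity are illustrative assumptions, not MOA itself.
import time
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def classify_partition(seed):
    """Classify one partition with a toy nearest-centroid rule (CPU-bound)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(50_000, 8))
    centroids = rng.normal(size=(10, 8))
    # distance of every sample to every centroid; label = nearest centroid
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.bincount(d.argmin(axis=1), minlength=10)

if __name__ == "__main__":
    partitions = list(range(8))            # granularity: 8 partitions

    t0 = time.perf_counter()
    serial = [classify_partition(p) for p in partitions]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(classify_partition, partitions))
    t_parallel = time.perf_counter() - t0

    print(f"serial:   {t_serial:.2f} s")
    print(f"parallel: {t_parallel:.2f} s  (speedup x{t_serial / t_parallel:.1f})")
```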

    Proposed classification for eLearning data analytics with MOA

    eLearning has become a crucial factor in educational organizations. With declining student numbers, eLearning has to offer more cross-departmental and multi-disciplinary courses tailored to individual needs, going beyond the traditional “one-size-fits-all” model. eLearning data analytics that has not been properly classified cannot produce reliable results. Classification of eLearning data helps improve the accuracy of outcomes and reduces pre-processing time. This research proposes a practical model for individual learning and personality. The proposed model, based on data from the LMS, classifies both student preferences and personalities. The model helps design future curricula to suit student personalities, which indirectly assists them in studying efficiently. The performance of the proposed classification is evaluated using MOA software. It outperforms existing approaches and improves accuracy on complex eLearning datasets. In addition, the results indicate an improvement in students' study time after applying the association rule model to the eLearning data.
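
    As a small illustration of the association-rule component, the sketch below computes support and confidence for pairwise rules over hypothetical LMS activity records. The transactions and thresholds are invented for demonstration and do not come from the study's dataset.

```python
# Sketch: mining simple association rules (support and confidence) from
# LMS-style records of course activities per student. The transactions and
# thresholds below are hypothetical examples, not the paper's dataset.
from itertools import combinations

# Each set lists activities one student engaged with in the LMS.
transactions = [
    {"video_lectures", "quizzes", "forum"},
    {"video_lectures", "quizzes"},
    {"quizzes", "forum", "assignments"},
    {"video_lectures", "quizzes", "assignments"},
    {"video_lectures", "forum"},
    {"quizzes", "assignments"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.3, 0.6
items = sorted({item for t in transactions for item in t})

# Generate rules A -> B from item pairs and keep the frequent, confident ones.
for a, b in combinations(items, 2):
    for lhs, rhs in ((a, b), (b, a)):
        sup = support({lhs, rhs})
        if sup >= min_support:
            conf = sup / support({lhs})
            if conf >= min_confidence:
                print(f"{lhs} -> {rhs}  (support={sup:.2f}, confidence={conf:.2f})")
```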

    Factors Impacting Online Learning Usage during Covid-19 Pandemic Among Sophomores in Sichuan Private Universities

    Purpose: This study aims to examine factors impacting online learning usage among students in Sichuan private universities, China. The variables used to construct the conceptual framework are perceived ease of use, perceived usefulness, information quality, system quality, service quality, attitude towards using, satisfaction, behavioral intention, and actual use. Research design, data and methodology: A quantitative approach (n=500) was conducted via online questionnaire, using judgmental, quota, and convenience sampling. Before the data collection, content validity was assessed with the index of item-objective congruence (IOC), and a pilot study of 40 samples passed Cronbach's alpha reliability testing. Afterwards, the data were analyzed in SPSS using descriptive statistics, confirmatory factor analysis (CFA), and structural equation modeling (SEM). Results: The results revealed that satisfaction had the strongest significant impact on behavioral intention, followed by perceived usefulness on attitude toward using, service quality on behavioral intention, behavioral intention on actual use, information quality on behavioral intention, perceived ease of use on attitude toward using, and attitude toward using on behavioral intention. On the other hand, the relationship between system quality and behavioral intention was not significant. Conclusions: Academic practitioners are recommended to encourage online learning usage among students by developing better online learning systems, technical support services, and learning experiences, leading to successful adoption and learning effectiveness for students in higher education.
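
    As a small illustration of the reliability step mentioned for the pilot study, the sketch below computes Cronbach's alpha from a hypothetical matrix of Likert-scale responses; the data is simulated and only the formula itself is taken as given.

```python
# Sketch: Cronbach's alpha reliability check on a hypothetical matrix of
# Likert-scale responses (rows = respondents, columns = questionnaire items).
# The simulated data below is illustrative only.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(40, 1))                       # 40 pilot respondents
items = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(40, 5))), 1, 5)
# Values near or above 0.7 are conventionally taken as acceptable reliability.
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```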

    Evaluation of graphic effects embedded image compression

    A fundamental factor in digital image compression is the conversion process. The intention of this process is to understand the shape of an image and convert the digital image to a grayscale configuration on which the encoding of the compression technique operates. This article focuses on an investigation of compression algorithms for images with artistic effects. A key question in image compression is how to effectively preserve the original quality of images. Image compression condenses images by reducing their redundant data so that they can be transformed cost-effectively. The common techniques include the discrete cosine transform (DCT), the fast Fourier transform (FFT), and the shifted FFT (SFFT). Experimental results report the compression ratio between original RGB images and grayscale images, together with a comparison of the techniques. The algorithm that best improves shape comprehension for images with graphic effects is the SFFT technique.
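
    As a rough illustration of the transform-based idea, the sketch below converts a stand-in RGB image to grayscale, discards small FFT coefficients, and reports a compression ratio and reconstruction error. The image, threshold, and coefficient-keeping rule are illustrative assumptions and do not reproduce the paper's DCT/FFT/SFFT implementations.

```python
# Sketch: grayscale conversion followed by FFT-based compression, where small
# frequency coefficients are discarded and the compression ratio is the share
# of coefficients kept. The synthetic image and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((128, 128, 3))                        # stand-in RGB image in [0, 1]

# Luminance-weighted grayscale conversion.
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

# Transform, keep only the strongest coefficients, and reconstruct.
coeffs = np.fft.fft2(gray)
threshold = np.quantile(np.abs(coeffs), 0.90)          # keep the top 10% of coefficients
compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
reconstructed = np.real(np.fft.ifft2(compressed))

kept = np.count_nonzero(compressed)
ratio = coeffs.size / kept                             # compression ratio
error = np.sqrt(np.mean((gray - reconstructed) ** 2))  # reconstruction RMSE
print(f"coefficients kept: {kept}/{coeffs.size}  (ratio {ratio:.1f}:1)")
print(f"reconstruction RMSE: {error:.4f}")
```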